Scripts - 28/11/2024
1 Packages and directory
2 Import and Manipulate
2.1 Import and Display
2.1.1 Import Images
img <- image_import("20231001_153711.jpg")
plot(img)
To import a list of images, use a vector of image names or the pattern argument. In the latter case, all images matching the specified pattern will be imported into a list.
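The pattern matching works like base R's file-name matching; a quick base-R sketch (with hypothetical file names, not pliman's internals) shows which names a given pattern would select:

```r
# Hypothetical file names; pattern matching mirrors base R's grepl()
files <- c("20231001_153711.jpg", "20231001_160425.jpg", "leaf_A.png")

# Names that image_import(pattern = "2023") would pick up
matched <- files[grepl("2023", files)]
matched
#> [1] "20231001_153711.jpg" "20231001_160425.jpg"
```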
img_list1 <- image_import(c("20231001_153711.jpg", "20231001_160425.jpg"))
2.1.2 Display Images
Individual images are displayed with plot(). To combine images, use the image_combine() function. Users can input a comma-separated list of objects or a list of objects of the Image class.
# Individual images
plot(img)
# Combine images
image_combine(img_list1)
pliman provides a set of image_*() functions for performing image manipulation and transformation of single images or a list of images based on the EBImage package.
2.2 Resolution
2.2.1 Image Resolution (DPI)
The dpi() function provides an interactive way to calculate the image resolution based on a known distance entered by the user. To calculate the image resolution (DPI), use the left mouse button to draw a line of known length, for example along a reference object of known size.
# this only works in an interactive session
(imgres <- dpi(img))
2.2.2 Resize an Image
Sometimes, it is necessary to resize high-resolution images to reduce computational effort and processing time. The image_resize() function is used to resize an image. The rel_size argument can be used to resize the image by relative size. For example, setting rel_size = 50 for a 1280 x 720 pixels image, the new image will have a size of 640 x 360 pixels.
image_dimension(img)
── Image dimension ─────────────────────────────────────────────────────────────
Width : 799
Height: 600
img_resized <- image_resize(img, rel_size = 50)
image_dimension(img_resized)
── Image dimension ─────────────────────────────────────────────────────────────
Width : 400
Height: 300
2.3 Apply a Function to Images
apply_fun_to_imgs(pattern = "2023",
                  image_resize,
                  rel_size = 50,
                  dir_processed = "smaller",
                  plot = FALSE)
── Sequential processing of 6 images ───── Started on "2025-08-31 | 20:58:04" ──
── Function `image_resize()` successfully applied to the images ────────────────
2.4 Export
To export images to the current directory, use the image_export() function. If a list of images is exported, the images will be saved considering the name and extension present in the list. If no extension is present, the images will be saved as *.jpg files.
image_export(img, "img_exported.jpg")
# or a subfolder
image_export(img, "test/img_exported.jpg")
3 Principles of Image Analysis
Image analysis using thresholding is a technique that involves setting a cutoff value for pixel intensity in an image. This technique segments the image into two distinct classes: one representing what is above the threshold (foreground) and the other what is below it (background). The process results in a binary image, where the objects of interest are highlighted in white on a black background. This approach is widely used in applications such as edge detection and object segmentation.
3.1 Image Indexes
The image_index() function constructs image indices using Red, Green, Blue, Red-Edge, and NIR bands.
# Calculate indices
indexes <- image_index(img, index = c("R, G, B, GRAY, L, B-G/(B+G)"))
ℹ Index "B-G/(B+G)" is not available. Trying to compute your own index.

# Create a histogram with RGB values
plot(indexes, type = "density")
In the case of the R index, the two peaks represent the leaf + reference (smaller peak) and the background (larger peak). The clearer the difference between these peaks, the better the image segmentation.
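A custom index such as B-G/(B+G) is just elementwise arithmetic on the channel matrices. A minimal base-R sketch of the idea, using a toy 2 x 2 image rather than pliman's internals (here the normalized difference is hand-coded with explicit parentheses):

```r
# Toy 2 x 2 image: one matrix per channel, values in [0, 1]
B <- matrix(c(0.8, 0.2, 0.5, 0.1), nrow = 2)
G <- matrix(c(0.2, 0.6, 0.5, 0.9), nrow = 2)

# Normalized blue-green difference, evaluated pixel by pixel
idx <- (B - G) / (B + G)
round(idx, 2)
```

A histogram or density plot of `idx` is exactly what `plot(indexes, type = "density")` shows for each computed index.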
3.2 Binary Images
To segment objects, pliman uses the threshold technique (Otsu, 1979)1, i.e., a cutoff point (considering pixel values) is chosen and the image is classified into two classes (foreground and background), producing a binary image. We can create this image with image_binary(). Binarization is the key step in all object-analysis workflows: the better the binary image, the more accurate the subsequent measurements.
Note that some leaf pixels were considered background and some background pixels were considered foreground. We can improve this binarization by applying a morphological operation (such as a median filter or opening) and filling the holes with fill_hull = TRUE. See how changing the filter argument impacts the results.
bin <- image_binary(img,
                    index = "R",
                    fill_hull = TRUE,
                    plot = FALSE)[[1]]
bin2 <- image_binary(img,
                     index = "R",
                     fill_hull = TRUE,
                     filter = 10,
                     plot = FALSE)[[1]]
bin3 <- image_binary(img,
                     index = "R",
                     fill_hull = TRUE,
                     opening = 10,
                     plot = FALSE)[[1]]
image_combine(bin, bin2, bin3, ncol = 3)
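For intuition, Otsu's method picks the cutoff that maximizes the between-class variance of the pixel intensities. A minimal base-R sketch on a toy bimodal pixel vector (not EBImage's implementation, which operates on the image histogram):

```r
# Otsu's threshold: maximize between-class variance over candidate cutoffs
otsu <- function(x, n = 256) {
  breaks <- seq(min(x), max(x), length.out = n)
  best <- breaks[1]; best_var <- -Inf
  for (ct in breaks) {
    w0 <- mean(x <= ct); w1 <- 1 - w0        # class weights
    if (w0 == 0 || w1 == 0) next
    mu0 <- mean(x[x <= ct]); mu1 <- mean(x[x > ct])
    v <- w0 * w1 * (mu0 - mu1)^2             # between-class variance
    if (v > best_var) { best_var <- v; best <- ct }
  }
  best
}

# Bimodal toy "image": background near 0.2, foreground near 0.8
set.seed(1)
px  <- c(rnorm(500, 0.2, 0.03), rnorm(500, 0.8, 0.03))
thr <- otsu(px)
binary <- px > thr   # the binary classification, as in image_binary()
```

With well-separated modes, the threshold lands in the gap between them and the binary classification recovers the two groups exactly.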
3.3 Segmentation
In pliman, the following functions can be used to segment objects in images.
- image_segment() to produce a segmented image (objects in the image and a white background).
- image_segment_iter() to segment an image interactively.
- image_segment_kmeans() to segment an image using the k-means algorithm.
- image_segment_manual() to segment an image manually.
- image_segment_mask() to segment an image with a mask.
These functions segment the image based on the value of an image index, which can be one of the RGB channels or any operation involving these channels.
3.3.1 Image Indices
image_segment(img,
index = "R",
fill_hull = TRUE,
opening = 5)
3.4 Object Analysis
The key is to obtain the contour of the objects, so we work with polygons!
A ‘polygon’ is a plane figure described by a finite number of straight line segments connected to form a closed chain (Singer, 1993)2.
We can then conclude that image objects can be expressed as polygons with n vertices. Pliman has a family of poly_*() functions that can be used to analyze polygons.
square <- draw_square() |> poly_close()
poly_area(square)
[1] 4
poly_perimeter(square)
[1] 8
polygon <- draw_n_tagon(6)
poly_area(polygon)
[1] 2.598076
n <- c(6, 10, 50, 100, 1000, 100000)
sapply(n, function(x){
  draw_n_tagon(x) |> poly_area()
})
[1] 2.598076 2.938926 3.133331 3.139526 3.141572 3.141593
Note that as the number of vertices increases, the area converges to \(\pi\), the area of a circle with radius 1.
3.4.1 Contour
img <- image_import("leaves.jpg")
plot(img)
# extract the contour
cont <- object_contour(img, index = "B", watershed = FALSE)
# Number of contour pixels
nrow(cont[[1]])
[1] 980
# contour coordinates
head(cont[[1]])
     [,1] [,2]
[1,]  194   24
[2,]  194   25
[3,]  193   26
[4,]  192   26
[5,]  191   26
[6,]  190   26
# polygon
plot_polygon(cont[[1]])
3.4.2 Measures
In the current version of pliman, you can calculate the following measures. For more details, see Chen & Wang (2005)3, Claude (2008)4, and Montero et al. (2009)5.
- Area
The area of a shape is calculated using the Shoelace formula (Lee and Lim, 2017)6, as follows:
\[ A=\frac{1}{2}\left |\sum_{i=1}^{n}\left(x_{i} y_{i+1}-x_{i+1}y_{i}\right)\right| \]
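The Shoelace formula is straightforward to hand-code; a base-R sketch of the same math (not pliman's poly_area() itself):

```r
# Shoelace formula: area of a polygon from its vertex coordinates
shoelace_area <- function(x, y) {
  n <- length(x)
  j <- c(2:n, 1)                       # index i + 1, wrapping around
  abs(sum(x * y[j] - x[j] * y)) / 2
}

# Unit square: area 1
shoelace_area(c(0, 1, 1, 0), c(0, 0, 1, 1))
#> [1] 1
```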
poly_area(cont)
[1] 60631.0 44430.5 44580.5 38990.0 93424.0 76965.5
- Perimeter
The perimeter is calculated as the sum of the Euclidean distance between all points of a shape. The distances can be obtained with poly_distpts().
poly_perimeter(cont)
       1        2        3        5       11       14 
1153.798 2330.010 1139.602 1215.323 1188.651 1266.621
# perimeter of a circle with radius 2
circle <- draw_circle(radius = 2, plot = FALSE)
poly_perimeter(circle)
[1] 12.56635
# check the result
2*pi*2
[1] 12.56637
- Center of mass
The center of mass of a shape, especially in two-dimensional space, represents the average position of all the points within that shape, weighted by their area (or mass if considering physical objects). It’s the point at which the entire area (or mass) of the shape can be thought to be concentrated. In practical terms, if you were to balance a cut-out of the shape on a pinpoint, the center of mass is the location where it would balance perfectly.
In a polygon (a shape made of straight-line segments), the center of mass is calculated by considering each segment’s contribution to the overall shape, and its coordinates (\(C_x\) and \(C_y\)) are given by
\[ \begin{aligned} C_x & =\frac{1}{6 A} \sum_{i=1}^n\left(x_i+x_{i+1}\right)\left(x_i y_{i+1}-x_{i+1} y_i\right) \\ C_y & =\frac{1}{6 A} \sum_{i=1}^n\left(y_i+y_{i+1}\right)\left(x_i y_{i+1}-x_{i+1} y_i\right) \end{aligned} \] where \(A\) is the area given above.
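Hand-coding these formulas in base R (a sketch of the math, not poly_mass() itself):

```r
# Polygon center of mass from the formulas above
poly_com <- function(x, y) {
  n <- length(x)
  j <- c(2:n, 1)                       # wrap index i + 1
  cross <- x * y[j] - x[j] * y         # x_i * y_{i+1} - x_{i+1} * y_i
  A  <- sum(cross) / 2                 # signed area
  cx <- sum((x + x[j]) * cross) / (6 * A)
  cy <- sum((y + y[j]) * cross) / (6 * A)
  c(cx, cy)
}

# Unit square: center of mass at (0.5, 0.5)
poly_com(c(0, 1, 1, 0), c(0, 0, 1, 1))
#> [1] 0.5 0.5
```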
plot_polygon(cont[[1]])
# centroid
cent <- apply(cont[[1]], 2, mean)
points(cent[1], cent[2], col = "red", pch = 19) # Red dot for centroid
# Center of mass
cm <- poly_mass(cont[[1]])
points(cm[1], cm[2], col = "blue", pch = 19) # Blue dot for center of mass
legend("topright",
legend = c("Centroid", "Center of Mass"),
col = c("red", "blue"), pch = 19)
- Radius
The radius of a pixel in the object’s contour is calculated as its distance to the center of mass of the object. These distances can be obtained with poly_centdist().
dist <- poly_centdist_mass(cont[[1]])
x <- c(cm[1], cont[[1]][1, 1])
y <- c(cm[2], cont[[1]][1, 2])
d1 <- sqrt(diff(x)^2 + diff(y)^2)
dist[[1]]
[1] 178.4624
plot_polygon(cont[[1]])
points(cm[1], cm[2], col = "blue", pch = 19) # Blue dot for center of mass
segments(x[1], y[1], x[2], y[2], col = "blue", lwd = 2)
plot(dist, type = "l")
- Length and Width
The length and width of an object are calculated with poly_lw(), as the difference between the maximum and minimum of the x and y coordinates after the object has been aligned with poly_align().
# wrong measures
plot_polygon(cont[[1]])
lw <- apply(cont[[1]], 2, \(x){range(x)})
abline(v = lw[[1]], col = "red")
abline(v = lw[[2]], col = "red")
abline(h = lw[[3]], col = "blue")
abline(h = lw[[4]], col = "blue")
# Correct measures
aligned <- poly_align(cont[[1]])
lw <- apply(aligned, 2, \(x){range(x)})
abline(v = lw[[1]], col = "red")
abline(v = lw[[2]], col = "red")
abline(h = lw[[3]], col = "blue")
abline(h = lw[[4]], col = "blue")
diff(lw)
         [,1]     [,2]
[1,] 190.3641 475.6255
# with poly_lw()
poly_lw(cont[[1]])
       length    width
[1,] 475.6255 190.3641
- Circularity and Elongation
Circularity (Montero et al. 2009)7 is also called shape compactness or a measure of the roundness of an object. It is given by \(C = P^2 / A\), where \(P\) is the perimeter and \(A\) is the area of the object.
poly_perimeter(cont) ^2 / poly_area(cont)
        1         2         3         5        11        14 
 21.95659 122.18968  29.13142  37.88176  15.12343  20.84479 
poly_circularity(cont)
        1         2         3         5        11        14 
 21.95659 122.18968  29.13142  37.88176  15.12343  20.84479
As the above measure depends on scale, the normalized circularity can be used, for which a perfect circle is assumed to have a circularity of 1. This measure is invariant under translation, rotation, and scale transformations, and is given by \(C_n = P^2 / (4 \pi A)\).
poly_perimeter(circle) ^2 / (4 * pi * poly_area(circle))
[1] 1.000003
poly_circularity_norm(circle)
[1] 0.9999967
poly_circularity_norm(cont)
        1         2         3         5        11        14 
0.5723279 0.1028431 0.4313683 0.3317261 0.8309208 0.6028541
poly_elongation() calculates the elongation of an object as 1 - width / length.
poly_elongation(circle)
              [,1]
[1,] -1.236173e-06
poly_elongation(cont)
           [,1]
[1,] 0.59976047
[2,] 0.07475182
[3,] 0.47262292
[4,] 0.77301835
[5,] 0.15421772
[6,] 0.34880080
- Perimeter Complexity (PVC)
The PVC is first calculated by smoothing the input contour over a specified number of iterations. The smoothed contour is then used to calculate the distances between corresponding points in the original and smoothed coordinates. These distances reflect the variations in contour shape after smoothing, and their sum represents the global magnitude of the variations. This sum is then multiplied by the standard deviation of the distances to capture the dispersion or spread of these variations.
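The recipe above can be sketched in base R, using simple neighbor averaging as the smoother (pliman's actual smoothing may differ; the toy contours below are made up):

```r
# Perimeter complexity sketch: smooth the contour, then measure how far
# each original point moved; PVC = sum(distances) * sd(distances)
smooth_contour <- function(xy, iters = 10) {
  n <- nrow(xy)
  for (k in seq_len(iters)) {
    prev <- xy[c(n, 1:(n - 1)), ]      # previous neighbor (wrapped)
    nxt  <- xy[c(2:n, 1), ]            # next neighbor (wrapped)
    xy   <- (prev + xy + nxt) / 3      # simple neighbor averaging
  }
  xy
}

pvc <- function(xy, iters = 10) {
  sm <- smooth_contour(xy, iters)
  d  <- sqrt(rowSums((xy - sm)^2))     # point-wise displacement
  sum(d) * sd(d)
}

# A jagged star outline changes much more under smoothing than a circle
ang    <- seq(0, 2 * pi, length.out = 200)[-200]
circle <- cbind(cos(ang), sin(ang))
star   <- cbind((1 + 0.3 * cos(8 * ang)) * cos(ang),
                (1 + 0.3 * cos(8 * ang)) * sin(ang))
```

Here pvc(star) is much larger than pvc(circle), matching the intuition that a more complex perimeter loses more detail when smoothed.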
4 Object Counting and Measurements
The analyze_objects() function is the key function in pliman to calculate a range of measurements that can be used to study the shape and texture of objects, such as leaves. In the following example, I show how to plot the length and width of each leaf in the following image.
leaves <- image_import("flax2.jpg", plot = TRUE)
image_index(leaves)
leaves_meas <-
analyze_objects(leaves,
# show_lw = TRUE,
marker = "id",
index = "B")
ℹ Processing a single image. Please, wait.
✔ Image Successfully analyzed! [1.9s]
# plot width and length
plot_measures(leaves_meas,
measure = "width",
col = "green",
hjust = -90)
plot_measures(leaves_meas,
measure = "length",
vjust = 60,
col = "red")
5 Measurement Correction
5.1 Known Resolution
dpi(leaves)
corrected <- get_measures(leaves_meas, dpi = 416)
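Under the hood, the DPI correction is a unit conversion: 1 inch = 2.54 cm, so a length in pixels is divided by the DPI and multiplied by 2.54, and areas use the square of that factor. A base-R sketch of the arithmetic (the helper names are made up, not pliman functions):

```r
# Pixel-to-centimeter conversion behind dpi-based correction
px_to_cm   <- function(px, dpi)  px * 2.54 / dpi
px2_to_cm2 <- function(px2, dpi) px2 * (2.54 / dpi)^2

# At 254 DPI, 100 px is exactly 1 cm
px_to_cm(100, 254)
#> [1] 1
px2_to_cm2(10000, 254)   # a 100 x 100 px square -> 1 cm^2
#> [1] 1
```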
# plot width and length
plot_measures(corrected,
measure = "width",
col = "green",
hjust = -90)
plot_measures(corrected,
measure = "length",
vjust = 60,
col = "red")
5.2 Reference Object (color)
The reference argument can be used to correct object measurements even when images are captured at different distances. This differs from the previous example in a subtle but crucial aspect: when reference is provided, batch processing can be used! In this example, the leaf area of the leaves image is quantified and corrected considering a 4 x 5 cm (20 cm\(^2\)) rectangle as a reference object. When reference = TRUE is set in analyze_objects(), the function performs a two-step object segmentation process:
The first step consists of segmenting the foreground (leaves and reference object) from the background. For this, an image index is used, which can be declared in the back_fore_index argument. The default (back_fore_index = "R/(G/B)") is optimized to segment green leaves and a blue reference object from a white background. Let's see the performance of this index in this example.
ind <- image_index(leaves, index = "R/(G/B)", plot = FALSE)[[1]]
ℹ Index "R/(G/B)" is not available. Trying to compute your own index.
bin <- image_binary(leaves, index = "R/(G/B)", plot = FALSE)[[1]]
ℹ Index "R/(G/B)" is not available. Trying to compute your own index.
image_combine(ind, bin)
# Segmenting the image
seg1 <- image_segment(leaves, index = "R/(G/B)")
Good job! The background has been removed. The next step is to segment the objects and the reference model; basically, we repeat the previous step, isolating the reference.
image_segment(seg1, "B-R")
seg2 <-
  image_binary(seg1,
               index = "B-R")
ℹ Index "B-R" is not available. Trying to compute your own index.
[1] 517741
Now that we know the indices to be used for each segmentation, we can use the analyze_objects function to obtain the corrected measurements based on the reference object.
res2 <-
analyze_objects(leaves,
index = "B",
reference = TRUE,
reference_area = 20,
back_fore_index = "R/(G/B)", # default
fore_ref_index = "B-R", # default
marker = "width")
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [2.5s]
5.3 Reference Object (size)
A second option for correcting the measurements is to use a reference object that is smaller or larger than all other objects in the image. In this case, the reference_larger and reference_smaller arguments indicate that the largest or smallest object in the image should be used as the reference object. This is only valid when reference = TRUE, with reference_area giving the area of the reference object. Important: when reference_smaller is used, objects with an area smaller than 1% of the average of all objects are ignored, to remove possible noise in the image, such as dust. Therefore, make sure that the reference object has an area that will not be removed by this cutoff point.
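The correction logic is a simple rescaling: once the largest object is taken as the reference of known area, every pixel area is multiplied by known_area / reference_pixels. A base-R sketch with made-up pixel areas (not pliman's internals):

```r
# Hypothetical object areas in pixels; the largest is the reference (6 cm^2)
areas_px       <- c(1200, 950, 1100, 8000)
reference_area <- 6                      # known reference area, cm^2

ref_px    <- max(areas_px)               # reference_larger = TRUE
factor    <- reference_area / ref_px     # cm^2 per pixel
areas_cm2 <- areas_px[areas_px != ref_px] * factor
areas_cm2
```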
flaxref <- image_import("flax_ref.jpg", plot = TRUE)
res2 <-
analyze_objects(flaxref,
index = "GRAY",
reference = TRUE,
reference_area = 6,
reference_larger = TRUE,
show_contour = FALSE,
marker = "point")
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [810ms]
plot(res2)
image_view(flaxref, object = res2)
Warning in CPL_crs_from_input(x): GDAL Message 1: +init=epsg:XXXX syntax is
deprecated. It might return a CRS with a non-EPSG compliant axis order. Further
messages of this type will be suppressed.
Warning: Found less unique colors (5) than unique zcol values (44)!
Interpolating color vector to match number of zcol values.
6 Batch Processing
In image analysis, it is often necessary to process more than one image. In pliman, batch processing can be done when the user specifies the pattern argument. This should indicate the file name pattern used to identify the images to be imported. For example, if pattern = "im", all images in the current working directory whose names match the pattern (e.g., img1, image1, im2) will be processed. Providing any number as a pattern (e.g., pattern = "1") will select images that contain any number in their name. An error will be returned if the pattern matches any unsupported file (e.g., img1.pdf).
If users need to analyze multiple images of the same sample, the images should share the same file name prefix, defined as the part of the file name preceding the first hyphen (-) or underscore (_). Then, when using get_measures(), measurements from leaf images called, for example, F1-1.jpeg, F1-2.jpeg, and F1-3.jpeg will be combined into a single image (F1), displayed in the merged object. This is useful, for example, to analyze large leaves that need to be split into multiple images or multiple leaves belonging to the same sample that cannot be scanned in a single image.
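The prefix rule (everything before the first hyphen or underscore) can be checked with a base-R one-liner; the file names below are the hypothetical examples from the text:

```r
# get_measures() merges images by prefix: the name up to the first "-" or "_"
prefix <- function(files) sub("[-_].*$", "", tools::file_path_sans_ext(files))

prefix(c("F1-1.jpeg", "F1-2.jpeg", "F1-3.jpeg", "F2_1.jpeg"))
#> [1] "F1" "F1" "F1" "F2"
```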
In the following example, 36 images will be analyzed. These images are in the 'linhaca' folder and contain flax leaves from 12 evaluation dates, with three repetitions8. To ensure all images are processed, they must share a common name pattern, in this case "A". Here, pattern = "A" indicates that all images matching this pattern should be analyzed and merged.
res2 <-
analyze_objects_minimal(pattern = "Cáp",
index = "GRAY",
show_contour = FALSE,
marker = "point",
marker_col = "red",
dir_original = "capsulas",
save_image = TRUE,
dir_processed = "capsulas/proc",
opening = 15)
── Analyzing 6 images ─────────────────────────────────── Started at 20:58:33 ──
ℹ Directory: 'D:/htp_cbmp2025/capsulas'

┌ Global statistics ──────────────────────────────────────────────┐
│ │
│ Total objects: 10361 Total area: 3727872 │
│ Overall mean area: 359.8 Overall SD: 111.57 │
│ Min area: 96 Max area: 7825 │
│ │
└──────────────────────────────────────────────────────────────────┘
┌ Across-image statistics (per-image averages) ────────────────────┐
│ │
│ Avg objects: 1726.83 Avg sum area: 621312 │
│ Min objects: 1176 Max objects: 2042 │
│ Avg area: 360.7 Avg SD area: 86.13 │
│ Min mean area: 275.41 Max mean area: 413.46 │
│ │
└─────────────────────────────────────────────── Based on 6 images ┘
── Processing successfully finished ──────────────── on 2025-08-31 | 20:58:48 ──
7 Spectral Profile of Objects
7.1 Flax leaves
To obtain the RGB/HSV intensity of each object in the image, use the argument object_rgb = TRUE in the function analyze_objects(). In the following example, we use the R, G, and B bands and their normalized values. The function pliman_indexes() returns the indexes available in the package.
To calculate a specific index, simply insert a formula containing the values of R, G, or B (e.g., object_index = "B/G+R").
img <- image_import("flax.jpg", plot = TRUE)
plot(img)
(indx <- pliman_indexes_rgb())
 [1] "B" "BGI" "BI" "BI2" "CI" "CIVE" "EGVI" "ERVI" "G"
[10] "GB" "GD" "GLAI" "GLI" "GR" "GRAY" "GRAY2" "HI" "HUE"
[19] "HUE2" "I" "L" "MGVRI" "NB" "NG" "NGBDI" "NGRDI" "NR"
[28] "R" "RB" "RI" "S" "SAVI" "SCI" "SHP" "SI" "VARI"
[37] "BCC" "BRVI" "GCC" "GRVI2" "IPCA" "MVARI" "NDI" "RCC" "RGBVI"
[46] "TGI" "VEG" "vNDVI" "WI"
flax_leaves <-
analyze_objects(img,
index = "B",
opening = 5,
object_index = c("DGCI", "CIVE", "ERVI", "EGVI", "R", "G", "B"),
pixel_level_index = TRUE,
marker = "id")
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [3.6s]
# PCA with the indexes
ind <- summary_index(flax_leaves, type = "var")
Warning in sqrt(eigenvalue): NaNs produced

Now, let’s plot the DGCI (Dark Green Color Index) on each object. The DGCI is based on the HSB (Hue, Saturation, and Brightness) color space and has been used as an indicator of green shade.9
image_view(img,
object = flax_leaves,
color_regions = custom_palette(c("yellow", "darkgreen")),
attribute = "DGCI")
ℹ Using downsample = 2 so that the number of rendered pixels approximates max_pixels.
Warning: Found less unique colors (5) than unique zcol values (170)!
Interpolating color vector to match number of zcol values.
7.2 Color Texture
Definitions and interpretations of various texture features calculated from the Gray Level Co-occurrence Matrix (GLCM).
ASM (Angular Second Moment):
Definition: Measures the uniformity or energy of the texture, calculated as the sum of squared elements in the GLCM.
Interpretation: Higher values indicate a more uniform texture; lower values suggest more variation.
CON (Contrast):
Definition: Measures local variations in the GLCM, calculating intensity contrast between a pixel and its neighbor.
Interpretation: High values indicate textures with sharp edges or strong intensity variations.
COR (Correlation):
Definition: Assesses the linear dependency of gray levels between neighboring pixels, measuring how correlated a pixel is to its neighbors.
Interpretation: High values indicate strong correlation, suggesting a predictable texture pattern.
VAR (Variance):
Definition: Measures the dispersion of gray levels in the GLCM, quantifying how much gray levels differ from the mean.
Interpretation: High variance indicates a wide range of intensity values, suggesting a more complex texture.
IDM (Inverse Difference Moment) or Local Homogeneity:
Definition: Measures the homogeneity of the texture, assigning higher weights to smaller gray-level differences.
Interpretation: Higher values indicate a more homogenous texture.
SAV (Sum Average):
Definition: Calculates the average of the sums of gray levels in the GLCM.
Interpretation: Reflects the average intensity of pixel pairs.
SVA (Sum Variance):
Definition: Measures the variability of the sum distribution in the GLCM.
Interpretation: High values indicate a wide spread of the sum distribution.
SEN (Sum Entropy):
Definition: Measures the randomness of the sum distribution in the GLCM.
Interpretation: High values indicate high randomness in the texture.
DVA (Difference Variance):
Definition: Measures the variability of the difference distribution in the GLCM.
Interpretation: High values suggest varied and complex texture patterns.
DEN (Difference Entropy):
Definition: Measures the randomness of the difference distribution in the GLCM.
Interpretation: High values indicate high unpredictability in the texture differences.
F12 (Difference Variance):
Definition: Another representation of Difference Variance, measuring the spread of differences in gray levels.
F13 (Angular Second Moment):
Definition: Another representation of ASM, measuring the uniformity of the texture.
These features help in analyzing textures by quantifying uniformity, contrast, and randomness, crucial in applications like image classification and pattern recognition.
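To see where these features come from, here is a minimal GLCM for horizontal (right) neighbors of a tiny integer image, with ASM and contrast computed from it (a sketch only; pliman/EBImage use more offsets and gray levels):

```r
# Gray-level co-occurrence matrix for horizontal (right) neighbors
glcm <- function(img, levels) {
  m <- matrix(0, levels, levels)
  for (r in seq_len(nrow(img))) {
    for (cc in seq_len(ncol(img) - 1)) {
      i <- img[r, cc]; j <- img[r, cc + 1]
      m[i, j] <- m[i, j] + 1
    }
  }
  m / sum(m)                           # normalize to probabilities
}

img <- matrix(c(1, 1, 2,
                1, 2, 2,
                2, 2, 1), nrow = 3, byrow = TRUE)
p <- glcm(img, levels = 2)

asm      <- sum(p^2)                   # uniformity / energy
ij       <- expand.grid(i = 1:2, j = 1:2)
contrast <- sum((ij$i - ij$j)^2 * p[as.matrix(ij)])
```

A constant image would give ASM = 1 and contrast = 0, matching the interpretations above: maximal uniformity, no local intensity variation.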
imgtest <-
image_import(c("beans/G166.jpg",
"beans/G799.jpg"),
plot = TRUE)
# Gray images
image_index(imgtest[[1]], "GRAY")
image_index(imgtest[[2]], "GRAY")
# Angular Second Moment
res <-
lapply(imgtest, function(x){
analyze_objects(x,
index = "B-R",
haralick = TRUE, # texture features
har_band = "GRAY",
marker = "dva",
marker_col = "green",
marker_size = 3,
opening = 3,
watershed = FALSE)
})
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [364ms]
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [453ms]
# Batch Processing
res <-
analyze_objects(pattern = "G",
dir_original = "beans",
index = "B-R",
haralick = TRUE, # texture features
marker_col = "green",
opening = 10,
watershed = FALSE,
object_index = c("L", "a", "b*"),
parallel = TRUE)
── Parallel processing using 3 cores ─────── Started on 2025-08-31 | 20:59:02 ──
ℹ Processing 55 images found on 'D:/htp_cbmp2025/beans'. Please, wait.
✔ Batch processing finished [26.9s]
┌ Global statistics ──────────────────────────────────────────────┐
│ │
│ Total objects: 281 Total area: 7950970 │
│ Overall mean area: 28295.27 Overall SD: 9217.13 │
│ Min area: 6358 Max area: 51688 │
│ │
└──────────────────────────────────────────────────────────────────┘
┌ Across-image statistics (per-image averages) ────────────────────┐
│ │
│ Avg objects: 5.11 Avg sum area: 144563.09 │
│ Min objects: 5 Max objects: 9 │
│ Avg area: 28465.66 Avg SD area: 4603.52 │
│ Min mean area: 8219 Max mean area: 40847.6 │
│ │
└─────────────────────────────────────────────── Based on 0 images ┘
── Processing successfully finished ──────────────── on 2025-08-31 | 20:59:29 ──
✔ Binding the results. [237ms]
dfpca <-
left_join(res$results, res$object_index) |>
select(img, L, a, `b*`, ent) |>
group_by(img) |>
summarise(across(where(is.numeric), mean)) |>
column_to_rownames("img")
Joining with `by = join_by(img, id)`
library(factoextra)
Welcome! Want to learn more? See two factoextra-related books at https://goo.gl/ve3WBa
library(FactoMineR)
a <- metan::clustering(dfpca, scale = TRUE)
Registered S3 method overwritten by 'GGally':
  method from
  +.gg   ggplot2
fviz_dend(a$hc, k = 5)
Warning: The `<scale>` argument of `guides()` cannot be `FALSE`. Use "none" instead as
of ggplot2 3.3.4.
ℹ The deprecated feature was likely used in the factoextra package.
Please report the issue at <https://github.com/kassambara/factoextra/issues>.

pcam <- PCA(dfpca, graph = FALSE)
fviz_pca_biplot(pcam, repel = TRUE)
img <- image_import("feijoes.jpg", resize = 50)
scatt <-
object_scatter(
img,
index = "B-R",
watershed = FALSE,
erosion = 15,
filter = 10,
object_index = c("L", "a", "b*"),
x = "L",
y = "ent",
haralick = TRUE,
show_id = FALSE,
xlab = "Luminosidade",
ylab = "Entropia da GLCM",
scale = 0.15,
xy_ratio = 1.5
)
ℹ Getting cached data...
✔ Getting cached data... [2.1s]
ℹ Putting objects in their positions...

✔ Putting objects in their positions... [303ms]
8 Case Studies
8.1 Pollen Counting and Viability
Image available in this discussion
img <- image_import("pollen.jpg", plot = TRUE)
res <-
analyze_objects(img,
filter = 2,
tolerance = 0.5,
lower_noise = 0.3,
show_contour = FALSE,
index = "L*")
ℹ Processing a single image. Please, wait.
✔ Image Successfully analyzed! [1.1s]
size <- res$results
ids <- size[size$area > 580, ]
ids2 <- size[size$area <= 580, ]
points(ids$x, ids$y, pch = 16)
points(ids2$x, ids2$y, pch = 16, col = "yellow")
legend("top",
c("Viable", "Not Viable"),
pch = 16,
ncol = 2,
col = c("black", "yellow"))
prop <- nrow(ids) / (nrow(ids) + nrow(ids2)) * 100
text(1020, -100,
labels = paste0("Count:", res$statistics$value[1]))
text(1100, -60,
labels = paste0("Viable Pollen: ", round(prop, 3), "%"))
8.2 Counting Corn Kernels on Cobs
img <- image_import("maize.jpg", plot = TRUE)
crop <-
img |>
image_crop(height = 52:1006,
plot = TRUE)
res <-
analyze_objects(crop,
filter = 10,
index = "R",
show_lw = TRUE,
invert = TRUE,
width_at = TRUE,
watershed = FALSE)
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [1.4s]
# width along the length
par(mfrow = c(1, 4))
a <- lapply(res$contours, poly_width_at, at = "height", plot = TRUE)
par(mfrow = c(1,1))
# Counting the Kernels
seg <- image_segment(crop,
filter = 20,
index = "R-B",
col_background = "gray",
invert = TRUE)
image_index(seg, "L*")
res <-
analyze_objects_shp(seg,
ncol = 4,
threshold = "adaptive",
windowsize = 33,
tolerance = 1,
index = "L*-a",
marker = "point",
marker_col = "black",
invert = TRUE,
plot = TRUE,
upper_size = 1200)
ℹ Processing a single image. Please, wait.
✔ Image Successfully analyzed! [1.9s]
ℹ Processing a single image. Please, wait.
✔ Image Successfully analyzed! [665ms]
ℹ Processing a single image. Please, wait.
✔ Image Successfully analyzed! [591ms]
ℹ Processing a single image. Please, wait.
✔ Image Successfully analyzed! [459ms]
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [501ms]
# A correction factor will be needed here. Note that only about half of the ear is analyzed
res$statistics |>
  filter(stat == "n")
   img stat value
1 shp1 n 233
2 shp2 n 219
3 shp3 n 180
4 shp4 n 202
8.3 Fourier Descriptors
The available functions for contour analysis using Elliptical Fourier Descriptors were adapted from Claude (2008)10.
The following example shows how to extract Fourier descriptors from sweet potato leaves, derived from an experiment conducted by the NEOSC group at UFSC.
img <- image_import("potato.jpg")
# Contours
cont <- object_contour(img,
index = "R",
plot = FALSE,
watershed = FALSE)
# removing the reference
plot_polygon(cont)
cont <- cont[-which(names(cont) == "9")]
plot_polygon(cont)
# Compute the Fourier descriptors
fourier <- efourier(cont, nharm = 30)
fourier_inv5 <- efourier_inv(fourier, nharm = 5)
fourier_inv10 <- efourier_inv(fourier, nharm = 10)
fourier_inv20 <- efourier_inv(fourier, nharm = 20)
# Plot the estimated contour with different harmonics
plot(img)
plot_contour(cont, col = "red", lwd = 1)
plot_contour(fourier_inv5, col = "blue", lwd = 3)
plot_contour(fourier_inv10, col = "green", lwd = 3)
plot_contour(fourier_inv20, col = "salmon", lwd = 3)
# or using the analyze_objects() function
# Contours
res <-
analyze_objects(img,
marker = "id",
watershed = FALSE,
reference = TRUE,
reference_area = 20,
efourier = TRUE,
nharm = 15,
plot = FALSE)
ℹ Processing a single image. Please, wait.
✔ Image Successfully analyzed! [2.4s]
image_view(img,
object = res,
alpha = 0.3,
attribute = "solidity")
ℹ Using downsample = 2 so that the number of rendered pixels approximates max_pixels.
Warning: Found less unique colors (5) than unique zcol values (7)!
Interpolating color vector to match number of zcol values.
coefs <- res$efourier_norm
pca <-
coefs |>
select(id:D15) |>
pliman::column_to_rownames("id") |>
select(-A1)
library(factoextra)
library(FactoMineR)
pcam <- PCA(pca, graph = FALSE)
fviz_pca_ind(pcam)
9 Phytopathometry
9.1 Using Color Palettes
Color palettes can be created simply by manually sampling small areas of representative images and producing a composite image representing each desired class (background, healthy tissue, and symptomatic tissue). The following image11 shows symptoms of anthracnose (Elsinoë ampelina) on grape leaves.
img <- image_import(pattern = "videira", plot = TRUE)
# passing image names as quoted strings tells pliman to search for those images in the working directory
sev <- measure_disease("videira",
img_healthy = "videira_healthy",
img_symptoms = "videira_disease",
img_background = "videira_background")
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [1s]
sev$severity
   healthy symptomatic
1 85.09901    14.90099
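The severity numbers are just pixel proportions: symptomatic pixels over total leaf pixels. A toy base-R sketch with hand-made masks (not measure_disease() internals):

```r
# Toy 4 x 4 masks: TRUE where segmentation assigned the class
leaf <- matrix(TRUE, 4, 4)             # every pixel is leaf here
symp <- matrix(FALSE, 4, 4)
symp[1:2, 1] <- TRUE                   # 2 of 16 pixels symptomatic

severity <- 100 * sum(symp & leaf) / sum(leaf)
severity
#> [1] 12.5
```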
9.2 Using Image Indices
grape <- img$videira.png
image_index(grape, c("B", "R", "G", "NGRDI"))
seg <- image_segment(as_image(grape@.Data[,,1:3]), "B",
fill_hull = TRUE)
sev2 <-
measure_disease("videira",
index_lb = "G",
index_dh = "NGRDI",
contour_col = "red",
opening = c(0, 5),
threshold = c("Otsu", 0),
# show_original = FALSE,
show_features = TRUE,
save_image = TRUE,
show_segmentation = TRUE,
watershed = TRUE)
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [1.2s]
sev2$severity
   healthy symptomatic
1 89.55889    10.44111
9.3 Batch Processing
To analyze multiple images from a directory, use the pattern argument to declare a pattern for file names. Here, 50 soybean leaves available in the repository https://osf.io/4hbr6, a database of plant disease severity annotation images, will be used. Thanks to Emerson M. Del Ponte and his collaborators for making this project publicly available. Using the save_image = TRUE argument saves the processed images in a temporary directory defined by tempdir().
# batch processing of all images matching the pattern
sev_batch <-
measure_disease(pattern = "soy",
dir_original = "sevsoja",
dir_processed = "sevproc",
index_lb = "B",
index_dh = "NGRDI",
threshold = c("Otsu", -0.03),
plot = FALSE,
# save_image = TRUE,
                parallel = TRUE)
── Parallel processing using 3 cores ───── Started on "2025-08-31 | 21:00:08" ──
ℹ Processing 50 images in parallel...
── Processing successfully finished ──────────────── on 2025-08-31 | 21:00:29 ──
✔ Batch processing finished [20.6s]
sev_batch$severity |>
ggplot(aes(x = symptomatic)) +
geom_histogram(bins = 8)
9.4 Leafhopper
9.5 Multiple Leaves in an Image
When multiple leaves are present in an image, the measure_disease function returns the average severity of the leaves in the image. To quantify severity per leaf, the measure_disease_byl() function can be used.
This function calculates the percentage of symptomatic leaf area using color palettes or RGB indices for each leaf (byl) in an image. This allows, for example, processing replicates of the same treatment and obtaining results for each replicate with a single image.
In the following example, images of orange leaves, kindly provided by Gabriele de Jesus, are processed.
img <- image_import("sev_leaves.jpg", plot = TRUE)
sev <-
measure_disease_byl(img,
index = "B",
index_lb = "B",
index_dh = "NGRDI")ℹ Processing a single image. Please, wait.
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [536ms]
ℹ Processing a single image. Please, wait.
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [594ms]
ℹ Processing a single image. Please, wait.
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [564ms]
ℹ Processing a single image. Please, wait.
ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [549ms]
ℹ Processing a single image. Please, wait.
✔ Image Successfully analyzed! [4.2s]
sev$severity
  img leaf  healthy symptomatic
1 img    1 59.26646    40.73354
2 img    2 59.62619    40.37381
3 img    3 60.08614    39.91386
4 img    4 57.36590    42.63410
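As noted earlier, measure_disease() reports only the average severity when several leaves share one image; with measure_disease_byl() that average can still be recovered from the by-leaf table. A quick check using the values reported above:

```r
# per-leaf symptomatic severities from the table above;
# their mean is what measure_disease() would report for the whole image
symptomatic <- c(40.73354, 40.37381, 39.91386, 42.63410)
mean(symptomatic)  # 40.91383
```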
9.6 Dose-Response Curves
This script analyzes data from a dose-response experiment evaluating the effectiveness of different products in reducing the severity of a plant disease. The use of the images is authorized by SUMITOMO-SA.
The first step is to quantify the severity within each Petri dish, each of which represents a dose of a particular product. The curves are then fitted with the drda package in R, a tool for dose-response data analysis.
The script fits a nonlinear regression model to the dose-response data of each product using the drda() function. The specified model is a 4-parameter log-logistic regression (“ll4”).
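For reference, the 4-parameter log-logistic mean function, written in the parameterization used when its derivative is taken at the end of this script (parameter names follow the coefficient descriptions given with the fitted models; the example values here are arbitrary):

```r
# 4-parameter log-logistic ("ll4") mean function:
#   alpha: value of the function at x = 0
#   delta: height of the curve (alpha + delta is the other asymptote)
#   eta:   steepness (growth rate) of the curve
#   phi:   x value at which the curve reaches its mid-point
ll4 <- function(x, alpha, delta, eta, phi) {
  alpha + delta * x^eta / (x^eta + phi^eta)
}

ll4(0,   alpha = 0.4, delta = -0.39, eta = 1.4, phi = 1.8)  # 0.4   (= alpha)
ll4(1.8, alpha = 0.4, delta = -0.39, eta = 1.4, phi = 1.8)  # 0.205 (= alpha + delta/2)
```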
# DOSE-RESPONSE
# Compute severity per leaf
sev <-
measure_disease_byl(pattern = "img",
index = "B",
index_dh = "NGRDI",
dir_original = "dose_response",
parallel = TRUE,
                        opening = c(25, 0))
── Parallel processing of 14 images ────── Started at "2025-08-31 | 21:00:36" ──
ℹ Dispatching batches...
✔ All batches complete! [29.2s]
res <-
map_dfr(sev, function(x){
x$severity
})
sevres <-
res |>
separate(img, into = c("img", "product", "dose"), sep = "_") |>
mutate(dose = as.numeric(str_replace_all(dose, ",", ".")),
symptomatic = symptomatic / 100)
models <-
sevres |>
group_by(product) |>
nest() |>
mutate(models = map(data,
~drda(symptomatic ~ dose,
data = .,
mean_function = "ll4"))) |> # define the model here
dplyr::select(-data)
# function to obtain the coefficients
get_results <- function(model,
resplevel = 0.5,
type = "relative"){
coefs <- coef(model) |> t()
ed <- effective_dose(model, y = resplevel) |> as.data.frame()
integ <- data.frame(nauc = nauc(model, range(model$model[[2]])))
cbind(coefs, ed, integ)
}
# Obtain the coefficients
# alpha: the value of the function at x = 0
# delta: height of the curve
# eta: the steepness (growth rate) of the curve
# phi: the x value at which the curve is equal to its mid-point
coefs <-
models |>
mutate(coefs = map_dfr(
.x = models,
.f = ~get_results(., resplevel = 0.5)) # DL50
) |>
dplyr::select(-models) |>
unnest(coefs) |>
ungroup() |>
as.data.frame()
coefs
  product     alpha      delta       eta       phi  Estimate Lower .95
1      P1 0.3968639 -0.3864929 1.3703455 1.8071352 1.8071352 1.1849347
2      P2 0.3821491 -0.3715773 0.9932846 0.4335226 0.4335226 0.2893241
  Upper .95       nauc
1 2.4293356 0.02744016
2 0.5777211 0.01948812
plot(models$models[[1]], models$models[[2]],
level = 0,
base = "10",
ylim = c(0, 0.5),
xlim = c(0, 100),
legend = c("P1", "P2"),
xlab = "Dose (ppm)",
ylab = "Disease Severity",
col = metan::ggplot_color(2),
cex = 2)
# derivative with respect to dose of the model
D(expression(alpha + delta * x^eta / (x^eta + phi^eta)), "x")
delta * (x^(eta - 1) * eta)/(x^eta + phi^eta) - delta * x^eta *
(x^(eta - 1) * eta)/(x^eta + phi^eta)^2
dy <- function(x,alpha, delta, eta, phi){
delta * (x^(eta - 1) * eta)/(x^eta + phi^eta) - delta * x^eta *
(x^(eta - 1) * eta)/(x^eta + phi^eta)^2
}
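A quick numerical sanity check that the hand-copied derivative matches a central finite difference of the mean function (the parameter values are arbitrary; the function and derivative are restated here so the snippet runs on its own):

```r
# ll4 mean function and its analytic derivative (same expressions as above)
f  <- function(x, alpha, delta, eta, phi) alpha + delta * x^eta / (x^eta + phi^eta)
dy <- function(x, alpha, delta, eta, phi) {
  delta * (x^(eta - 1) * eta)/(x^eta + phi^eta) - delta * x^eta *
    (x^(eta - 1) * eta)/(x^eta + phi^eta)^2
}
# central finite difference at x = 1
x <- 1; h <- 1e-6
num <- (f(x + h, 0.4, -0.39, 1.4, 1.8) - f(x - h, 0.4, -0.39, 1.4, 1.8)) / (2 * h)
ana <- dy(x, 0.4, -0.39, 1.4, 1.8)
abs(num - ana) < 1e-6  # TRUE: analytic and numerical derivatives agree
```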
# First derivative
ggplot(data.frame(x = c(0, 5)), aes(x = x)) +
pmap(coefs |> select(product:phi), function(product, alpha, delta, eta, phi) {
stat_function(fun = function(x) dy(x, alpha, delta, eta, phi),
aes(color = product),
linewidth = 1)
}) +
geom_vline(aes(xintercept = phi,
color = product),
data = coefs,
linetype = 2) +
labs(x = "Dose (ppm)",
y = "Severity Reduction Rate (% per ppm)",
color = "Product") +
  ggthemes::theme_base()
Warning: Removed 1 row containing missing values or values outside the scale range
(`geom_function()`).

9.7 Fungi in Petri Dishes
# fungi in petri dish
fungi <- image_import("fungi.jpg", plot = TRUE)
image_index(fungi, "L")
analyze_objects(fungi,
index = "L",
filter = 15,
watershed = FALSE,
contour_size = 3,
invert = TRUE) |>
get_measures(dpi = 90) |>
plot_measures(measure = "area",
col = "black",
                size = 2)
ℹ Processing a single image. Please, wait.
✔ Image Successfully analyzed! [558ms]

9.8 Bacteria
bac <- image_import("bacteria.jpg", plot = TRUE)
res <-
analyze_objects(bac,
index = "L*",
threshold = 0.3,
marker = "point")ℹ Processing a single image. Please, wait.

✔ Image Successfully analyzed! [535ms]
res$statistics
       stat        value
1         n 1.780000e+02
2  min_area 3.000000e+00
3 mean_area 2.754494e+01
4  max_area 8.200000e+01
5   sd_area 1.467735e+01
6  sum_area 4.903000e+03
7  coverage 3.064375e-02
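The coverage statistic is the summed object area divided by the total number of pixels in the image. Assuming, hypothetically, a 400 × 400 pixel image (an image size that is consistent with the statistics above, not one stated in the text), the reported value can be reproduced:

```r
# coverage = total object area / total image pixels
# (400 x 400 = 160,000 pixels is an assumed image size)
sum_area <- 4903
n_pixels <- 400 * 400
sum_area / n_pixels  # 0.03064375
```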
Footnotes
Otsu, N. 1979. Threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern SMC-9(1): 62–66. doi: 10.1109/tsmc.1979.4310076.↩︎
Source: http://gazeta-rs.com.br/as-principais-doencas-da-videira-na-primavera/#prettyPhoto↩︎
Karcher, D.E., and M.D. Richardson. 2003. Quantifying Turfgrass Color Using Digital Image Analysis. Crop Science 43(3): 943–951. doi: 10.2135/cropsci2003.9430↩︎
Claude, J. 2008. Morphometrics with R https://link.springer.com/book/10.1007/978-0-387-77789-4↩︎
Montero, RS, E. Bribiesca, R. Santiago, and E. Bribiesca. 2009. State of the Art of Compactness and Circularity Measures. International Mathematical Forum 4(27): 1305–1335.↩︎
Lee, Y., and W. Lim. 2017. Shoelace Formula: Connecting the Area of a Polygon and the Vector Cross Product. The Mathematics Teacher 110(8): 631–636. doi: 10.5951/MATHTEACHER.110.8.0631.↩︎

